
    Minimum Cuts in Near-Linear Time

    We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a "semi-duality" between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(n^2 log n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near-minimum cuts a graph may have, and a new data structure for representing them in a space-efficient manner.
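    As a concrete illustration of randomized minimum-cut computation, here is a minimal sketch of the classic random-contraction algorithm, a simpler predecessor of the tree-packing approach above; this is not the paper's near-linear algorithm, and the function and parameter names are illustrative:

```python
import random

def contract_min_cut(edges, n, trials=None):
    """Estimate the minimum cut of a connected undirected multigraph by
    repeated random edge contraction (the classic contraction algorithm,
    not the near-linear tree-packing algorithm described above).

    edges: list of (u, v) pairs over vertices labelled 0..n-1.
    """
    if trials is None:
        trials = n * n  # enough repetitions for a high success probability

    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):  # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        # contracting edges in a uniformly random order is equivalent to
        # repeatedly contracting a uniformly random remaining edge
        order = edges[:]
        random.shuffle(order)
        components = n
        for u, v in order:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv  # contract the edge (u, v)
                components -= 1
        # edges crossing the two surviving super-vertices form a cut
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

    For example, on a 4-cycle with a pendant edge, contract_min_cut([(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)], 5) returns 1, the cut isolating vertex 4.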

    Linear-Time Poisson-Disk Patterns

    We present an algorithm for generating Poisson-disk patterns, taking O(N) time to generate N points. The method is based on a grid of regions which can contain no more than one point in the final pattern, and uses an explicit model of point arrival times under a uniform Poisson process.
    Comment: 4 pages, 2 figures
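    The one-point-per-cell grid mentioned above is the same backbone used by the well-known Bridson-style dart-throwing sampler sketched below; this illustrates the grid idea only, not the paper's O(N) arrival-time construction, and all names are illustrative:

```python
import math
import random

def poisson_disk(width, height, r, k=30):
    """Poisson-disk sampling via dart throwing over a background grid
    whose cells (of side r / sqrt(2)) hold at most one point each."""
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]
    samples, active = [], []

    def far_enough(p):
        # only the 5x5 block of cells around p can violate the radius
        ci, cj = int(p[1] / cell), int(p[0] / cell)
        for i in range(max(ci - 2, 0), min(ci + 3, rows)):
            for j in range(max(cj - 2, 0), min(cj + 3, cols)):
                q = grid[i][j]
                if q and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                    return False
        return True

    def accept(p):
        samples.append(p)
        active.append(p)
        grid[int(p[1] / cell)][int(p[0] / cell)] = p

    accept((random.uniform(0, width), random.uniform(0, height)))
    while active:
        base = random.choice(active)
        for _ in range(k):  # k candidate darts in the annulus [r, 2r)
            a = random.uniform(0, 2 * math.pi)
            d = random.uniform(r, 2 * r)
            p = (base[0] + d * math.cos(a), base[1] + d * math.sin(a))
            if 0 <= p[0] < width and 0 <= p[1] < height and far_enough(p):
                accept(p)
                break
        else:  # no dart fit: this point's neighborhood is saturated
            active.remove(base)
    return samples
```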

    Approximate Graph Coloring by Semidefinite Programming

    We consider the problem of coloring k-colorable graphs with the fewest possible colors. We present a randomized polynomial time algorithm that colors a 3-colorable graph on n vertices with min{O(Delta^{1/3} log^{1/2} Delta log n), O(n^{1/4} log^{1/2} n)} colors, where Delta is the maximum degree of any vertex. Besides giving the best known approximation ratio in terms of n, this marks the first non-trivial approximation result as a function of the maximum degree Delta. This result can be generalized to k-colorable graphs to obtain a coloring using min{O(Delta^{1-2/k} log^{1/2} Delta log n), O(n^{1-3/(k+1)} log^{1/2} n)} colors. Our results are inspired by the recent work of Goemans and Williamson, who used an algorithm for semidefinite optimization problems, which generalize linear programs, to obtain improved approximations for the MAX CUT and MAX 2-SAT problems. An intriguing outcome of our work is a duality relationship established between the value of the optimum solution to our semidefinite program and the Lovasz theta-function. We show lower bounds on the gap between the optimum solution of our semidefinite program and the actual chromatic number; by duality this also demonstrates interesting new facts about the theta-function.
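    To make the semidefinite-programming connection concrete, the sketch below shows the random-hyperplane rounding step used by this family of algorithms, assuming the unit vectors of a solved vector coloring are already given; no SDP solver is reproduced here, and the names are illustrative:

```python
import numpy as np

def hyperplane_round(vectors, t):
    """Round a vector coloring to at most 2^t colors with t random
    hyperplanes: vertices with the same sign pattern across all
    hyperplanes share a color. Monochromatic edges would be handled by
    recursing on the offending vertices; that step is omitted here.

    vectors: (n, d) array of unit vectors from a solved semidefinite
    program (assumed given).
    """
    n, d = vectors.shape
    normals = np.random.randn(t, d)        # t random hyperplane normals
    signs = (vectors @ normals.T >= 0)     # (n, t) boolean sign pattern
    # pack each sign pattern into an integer color id
    return signs.astype(int) @ (1 << np.arange(t))
```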

    Does Confidence Reporting from the Crowd Benefit Crowdsourcing Performance?

    We explore the design of an effective crowdsourcing system for an M-ary classification task. Crowd workers complete simple binary microtasks whose results are aggregated to give the final classification decision. We consider the scenario where the workers have a reject option, so that they are allowed to skip microtasks when they are unable to or choose not to respond. Additionally, the workers report quantized confidence levels when they are able to submit definitive answers. We present an aggregation approach using a weighted majority voting rule, where each worker's response is assigned an optimized weight to maximize the crowd's classification performance. We obtain a counterintuitive result: the classification performance does not benefit from workers reporting quantized confidence. Therefore, the crowdsourcing system designer should employ the reject option without requiring confidence reporting.
    Comment: 6 pages, 4 figures, SocialSens 2017. arXiv admin note: text overlap with arXiv:1602.0057
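    As an illustration of the weighted majority voting rule described above, here is a minimal aggregation sketch; the codebook mapping classes to binary microtask answers and the per-worker weights are assumed given (the paper optimizes those weights), and all names are hypothetical:

```python
import numpy as np

def weighted_majority_decide(responses, weights, num_classes):
    """Aggregate binary microtask answers into an M-ary decision.

    responses: (workers, tasks) array with entries +1, -1, or 0, where
    0 marks a microtask skipped under the reject option.
    weights: one optimized weight per worker (assumed given).
    Classes are mapped to +/-1 codewords over the microtasks (a
    hypothetical codebook; requires num_classes <= 2**tasks), and the
    class whose codeword best agrees with the weighted vote wins.
    """
    tasks = responses.shape[1]
    codewords = np.array([[1 if (c >> j) & 1 else -1 for j in range(tasks)]
                          for c in range(num_classes)])
    votes = weights @ responses    # weighted vote per microtask
    scores = codewords @ votes     # agreement of each class codeword
    return int(np.argmax(scores))
```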

    Wicked Problems and Gnarly Results: Reflecting on Design and Evaluation Methods for Idiosyncratic Personal Information Management Tasks

    This paper is a case study of an artifact design and evaluation process; it is a reflection on how sound thinking about design methods may at times lead to sub-optimal outcomes. Our goal has been to assess our decision-making process throughout the design and evaluation stages for a software prototype, in order to consider where design methodology may need to be tuned to be more sensitive to the domain of practice, in this case software evaluation in personal information management. In particular, we reflect on design methods around (1) scale of prototype, (2) prototyping and design process, (3) study design, and (4) study population.

    Bringing the Semantic Web home: a research agenda for local, personalized SWUI

    We suggest that by taking the Semantic Web local and personal, and deploying it as a shared "data sea" for all applications to trawl, new types of interaction become possible (even necessitated) by this heterogeneous source integration. We present a motivating scenario to foreground the kind of interaction we envision as possible, and outline a series of associated questions about data integration issues, and in particular about the interaction challenges fostered by these new possibilities. We sketch out some early approaches to these questions, but our goal is to identify a wider field of questions for the SWUI community to consider regarding the interaction implications of a local and social Semantic Web, not just a public one.

    Learning Classes Correlated to a Hierarchy

    Trees are a common way of organizing large amounts of information by placing items with similar characteristics near one another in the tree. We introduce a classification problem in which a given tree structure gives us information about the best way to label nearby elements. We suggest that many practical problems fall within this domain. We propose a way to map the classification problem onto a standard Bayesian inference problem. We also give a fast, specialized inference algorithm that incrementally updates the relevant probabilities. We apply this algorithm to web-classification problems and show that it works well empirically.
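    To make the setting concrete, below is a minimal sum-product sketch of Bayesian inference over a labelled tree; the parent-child transition model is an illustrative assumption, and this is generic belief propagation rather than the paper's fast incremental algorithm:

```python
import numpy as np

def root_posterior(children, likelihood, stay=0.9):
    """Posterior over a binary label at the root of a tree (node 0).

    children[i] lists the children of node i; likelihood[i] is a
    length-2 array of observation likelihoods for node i's label.
    A child is assumed to keep its parent's label with probability
    `stay` (an illustrative choice, not the paper's model). A downward
    pass would extend this to marginals at every node.
    """
    T = np.array([[stay, 1 - stay],
                  [1 - stay, stay]])

    def upward(i):
        msg = np.asarray(likelihood[i], dtype=float)
        for c in children[i]:
            msg = msg * (T @ upward(c))  # marginalize the child out
        return msg

    belief = upward(0)
    return belief / belief.sum()
```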

    Exact Algorithms for the Canadian Traveller Problem on Paths and Trees

    The Canadian Traveller problem is a stochastic shortest-path problem in which one learns the cost of an edge only upon arriving at one of its endpoints. The goal is to find an adaptive policy (adjusting as one learns more edge costs) that minimizes the expected cost of travel. The problem is known to be #P-hard. Since there has been no significant progress on approximation algorithms for several decades, we have chosen to seek out special cases for which exact solutions exist, in the hope of demonstrating techniques that could lead to further progress. Applying techniques from the theory of Markov Decision Processes, we give an exact solution for graphs of parallel (undirected) paths from source to destination with random two-valued edge costs. We also offer a partial generalization to traversing perfect binary trees.
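    For readers unfamiliar with the machinery, here is a generic value-iteration sketch for a finite stochastic shortest-path Markov Decision Process; the paper's actual state space (the traveller's position together with the edge costs observed so far) is problem-specific and not modelled here, and all names are illustrative:

```python
def value_iteration(num_states, goal, actions, eps=1e-9):
    """Expected optimal cost-to-go for every state of a finite MDP.

    actions[s] is a list of (cost, outcomes) pairs available in state s,
    where outcomes is a list of (probability, next_state). Non-goal
    states with no actions are treated as absorbing with cost 0, a
    simplification acceptable for this sketch. Costs must be
    non-negative for the upward-converging iteration used here.
    """
    V = [0.0] * num_states
    while True:
        delta = 0.0
        for s in range(num_states):
            if s == goal or not actions[s]:
                continue
            best = min(c + sum(p * V[nxt] for p, nxt in outs)
                       for c, outs in actions[s])
            delta = max(delta, abs(V[s] - best))
            V[s] = best
        if delta < eps:
            return V
```

    In the Canadian Traveller setting, each action would encode choices such as "advance along the current path" or "retreat and switch paths", with outcomes branching on the edge cost revealed at the next vertex.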